[RL] support bf16 moe tp permute, group_gemm, unpermute#7189

Open
ckl117 wants to merge 3 commits into PaddlePaddle:develop from ckl117:dev_bf16_moe_deepgemm

Conversation

@ckl117
Collaborator

@ckl117 ckl117 commented Apr 3, 2026

Motivation

💡 If this PR is a Cherry Pick, the PR title needs to follow the format by adding the [Cherry-Pick] label at the very beginning and appending the original PR ID at the end. For example, [Cherry-Pick][CI] Add check trigger and logic(#5191)


Modifications

Usage or Command

Accuracy Tests

Checklist

  • Add at least one tag in the PR title.
    • Tag list: [FDConfig], [APIServer], [Engine], [Scheduler], [PD Disaggregation], [Executor], [Graph Optimization], [Speculative Decoding], [RL], [Models], [Quantization], [Loader], [OP], [KVCache], [DataProcessor], [BugFix], [Docs], [CI], [Optimization], [Feature], [Benchmark], [Others], [XPU], [HPU], [GCU], [DCU], [Iluvatar], [Metax]
    • You can add new tags based on the PR content, but the semantics must be clear.
  • Format your code; run pre-commit before committing.
  • Add unit tests. If no unit tests are added, please explain the reason in this PR.
  • Provide accuracy results.
  • If the current PR targets a release branch, make sure it has first been submitted to the develop branch, then cherry-pick it to the release branch with the [Cherry-Pick] PR tag.

@paddle-bot

paddle-bot bot commented Apr 3, 2026

Thanks for your contribution!

@codecov-commenter

codecov-commenter commented Apr 3, 2026

Codecov Report

❌ Patch coverage is 52.94118% with 8 lines in your changes missing coverage. Please review.
⚠️ Please upload report for BASE (develop@5c9fa43). Learn more about missing BASE report.

Files with missing lines | Patch % | Lines
...l_executor/layers/moe/fused_moe_cutlass_backend.py | 52.94% | 8 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             develop    #7189   +/-   ##
==========================================
  Coverage           ?   73.44%           
==========================================
  Files              ?      383           
  Lines              ?    53640           
  Branches           ?     8411           
==========================================
  Hits               ?    39398           
  Misses             ?    11566           
  Partials           ?     2676           
Flag | Coverage Δ
GPU | 73.44% <52.94%> (?)

Flags with carried forward coverage won't be shown. Click here to find out more.

☔ View full report in Codecov by Sentry.

fastdeploy-bot

This comment was marked as outdated.


@fastdeploy-bot fastdeploy-bot left a comment


🤖 AI Code Review | 2026-04-10 18:09 CST

📋 Review Summary

PR overview: adds tp permute, group_gemm, and unpermute support for bf16 MoE, using paddlefleet_ops.deep_gemm.m_grouped_bf16_gemm_nn_contiguous in place of the original compute_ffn path.

Changed files: model_executor/layers/moe/fused_moe_cutlass_backend.py

Impact tags: [RL] [OP]

📝 PR Convention Check

The PR title carries the [RL] tag and follows the convention.

However, the Motivation and Modifications sections of the PR description are not filled in; please add them:

Title suggestion (already compliant; can stay as-is):

  • [RL] support bf16 moe tp permute, group_gemm, unpermute

Description template (suggested content):

## Motivation

Improve bf16 MoE performance in TP mode by replacing the original `moe_expert_ffn` path with the `paddlefleet_ops.deep_gemm.m_grouped_bf16_gemm_nn_contiguous` and `fused_swiglu_scale` operators.

## Modifications

1. Add an `m_grouped_bf16_gemm_nn_contiguous` function wrapping `paddlefleet_ops.deep_gemm.m_grouped_bf16_gemm_nn_contiguous`.
2. In `apply_ep_prefill` and `apply_tp`, when `FD_USE_PHI_MOE_PERMUTE=True` and `moe_quant_type == "w16a16"`:
   - use `moe_permute(return_expert_indices=True)` to obtain `expert_idx_per_token`
   - use `m_grouped_bf16_gemm_nn_contiguous` + `fused_swiglu_scale` in place of `compute_ffn`
   - use `moe_unpermute(using_weighted_combine=False)`
3. Add support for `layer.with_bias` in `apply_tp`.
4. Remove the now-unused `count_tokens_per_expert_func` import.
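For intuition, the permute → grouped GEMM → unpermute data flow described above can be sketched in plain Python. This is an illustrative toy, not the FastDeploy implementation: the real kernels operate on contiguous bf16 tensors on GPU, while here tokens are lists and each "expert" is a small weight matrix. Function and variable names other than `expert_idx_per_token` are made up for the sketch.

```python
# Toy sketch of MoE permute -> grouped GEMM -> unpermute (not the real kernels).

def matmul(x, w):
    # x: list of rows; w: in_dim x out_dim weight as a list of rows.
    return [[sum(xi * w[k][j] for k, xi in enumerate(row))
             for j in range(len(w[0]))] for row in x]

def moe_forward(tokens, expert_ids, expert_weights):
    n = len(tokens)
    # Permute: reorder tokens so those routed to the same expert are contiguous.
    order = sorted(range(n), key=lambda i: expert_ids[i])
    permuted = [tokens[i] for i in order]
    # Per-token expert index, as returned by moe_permute(return_expert_indices=True).
    expert_idx_per_token = [expert_ids[i] for i in order]
    # Grouped GEMM: one matmul per contiguous segment of same-expert tokens.
    out = [None] * n
    start = 0
    while start < n:
        e = expert_idx_per_token[start]
        end = start
        while end < n and expert_idx_per_token[end] == e:
            end += 1
        out[start:end] = matmul(permuted[start:end], expert_weights[e])
        start = end
    # Unpermute: scatter results back to the original token order.
    restored = [None] * n
    for pos, i in enumerate(order):
        restored[i] = out[pos]
    return restored
```

With two 2x2 experts (identity and a doubling matrix) and routing `[1, 0, 1]`, the output comes back in the original token order with each token transformed by its own expert.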

Issues

Level | File | Summary
🟡 Suggestion | fastdeploy/model_executor/layers/moe/fused_moe_cutlass_backend.py:181 | apply_ep_prefill is missing handling for layer.with_bias

Overall Assessment

The approach is clear: the original implementation is replaced with the new paddlefleet_ops.deep_gemm operators to improve performance. The new parameters return_expert_indices and using_weighted_combine=False are already used in other backend files, so this follows the existing design pattern.

Please fill in the PR description and fix the bias-handling inconsistency in apply_ep_prefill.

```python
out = paddlefleet_ops.fused_swiglu_scale(out, dst_weights)
ffn_out = m_grouped_bf16_gemm_nn_contiguous(
    out, getattr(layer, self.added_weight_attrs[1]), expert_idx_per_token
)
```
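For context on the snippet above, a SwiGLU activation gates one half of the FFN intermediate with the SiLU of the other half, and the "scale" here plausibly applies the per-token routing weight. The exact semantics and input layout of `fused_swiglu_scale` are an assumption; this plain-Python sketch only shows the textbook computation for one row:

```python
import math

def swiglu_scale(x, scale):
    # Assumed layout: x is one row holding [gate | up] halves concatenated.
    # SwiGLU = silu(gate) * up, then scaled (e.g. by the routing weight).
    h = len(x) // 2
    silu = lambda v: v / (1.0 + math.exp(-v))
    return [silu(x[i]) * x[h + i] * scale for i in range(h)]
```

The fused operator presumably does this element-wise over the whole grouped-GEMM output in one kernel, avoiding an extra memory round-trip.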

🟡 Suggestion: apply_ep_prefill is missing handling for layer.with_bias

The new code path in apply_ep_prefill (FD_USE_PHI_MOE_PERMUTE and self.moe_quant_type == "w16a16") does not handle the layer.with_bias case, which is inconsistent with apply_tp.

The original compute_ffn method has bias-handling logic at lines 115-117, and apply_tp has the same handling at lines 370-372; consider adding the same bias-handling code to apply_ep_prefill for consistency.
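The suggested consistency fix amounts to adding each token's per-expert bias to the grouped-GEMM output on the new path too. The following is a hypothetical sketch with invented names (`add_expert_bias`, the list-based layout); the actual attribute layout in fused_moe_cutlass_backend.py may differ:

```python
def add_expert_bias(ffn_out, expert_idx_per_token, biases):
    # Hypothetical sketch: add the routed expert's bias row to each token's
    # FFN output, mirroring the bias handling described for apply_tp.
    return [[v + biases[e][j] for j, v in enumerate(row)]
            for row, e in zip(ffn_out, expert_idx_per_token)]
```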


3 participants